The evidence synthesis pipeline

 

Extracting and calculating effects

What is an effect size?

To synthesise a diverse evidence base in the context of a meta-analysis, we need to quantify the outcomes of individual studies in a common numerical currency.

 
 

Effect sizes are that currency.

What is an effect size?

An effect size is a number describing the magnitude of a quantity or relationship: the strength and (often) the direction of an outcome.

 
 

May be descriptive, if it is a sample-based estimate of a single quantity, or comparative, if it estimates the relationship between two variables in a population.

 
 

Also known as an outcome measure, treatment effect, or simply effect, depending on the field.

 

Selecting effects

What are we trying to measure?

Three common cases

What are we trying to measure?

1: A difference in the means of groups.

What are we trying to measure?

2: A difference in binary outcomes between groups.

What are we trying to measure?

3: A continuous association between groups.

Common effect size measures

  1. A difference in the means of groups.
    • Standardized mean difference (Cohen’s d or Hedges’ g)
    • Raw (unstandardized) mean difference (D)
    • Response ratios (RR)
  2. A difference in binary outcomes between groups.
    • Odds ratio (OR) <-> (Log odds ratio)
    • Risk ratio (RR)
    • Risk difference (RD)
  3. A continuous association between groups.
    • Correlation coefficient (Pearson’s r) <-> (Fisher’s z)

N.B. Samples are only ever samples, so estimates always carry sampling error


The Standardized Mean Difference

Useful when quantifying a difference in the means of groups.

The Standardized Mean Difference

\(\Large g = \frac{\bar{Y}_1 - \bar{Y}_2}{S_{pooled}}J\)

where \(J\) is a small-sample bias-correction factor (approaching 1 as sample sizes grow).

 
 

Seeks to capture the degree to which the two distributions represent distinct clusters of scores, even when the contributing studies do not measure exactly the same outcome

 
 

Measures the distance between the two group means in units of pooled standard deviations

The Standardized Mean Difference

How does the addition of nitrogen-rich fertilizer affect the yield of wheat?

      dat_yield %>%
        group_by(treatment) %>% 
        summarise(mean = mean(yield),
                  sd = sd(yield),
                  n = n())
## # A tibble: 2 × 4
##   treatment               mean    sd     n
##   <chr>                  <dbl> <dbl> <int>
## 1 control                 242.  9.72    65
## 2 nitrogenous_fertilizer  259.  9.73    65
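These summary statistics are all that is needed to compute the SMD by hand. A minimal base-R sketch (using the rounded means, SDs, and sample sizes printed above; metafor::escalc() automates this later in the slides):

```r
# Summary statistics from the wheat-yield example (rounded values from above)
m1 <- 259; s1 <- 9.73; n1 <- 65   # nitrogenous fertilizer
m2 <- 242; s2 <- 9.72; n2 <- 65   # control

# Pooled standard deviation
s_pooled <- sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2))

# Cohen's d, then Hedges' small-sample correction factor J
d <- (m1 - m2) / s_pooled
J <- 1 - 3 / (4 * (n1 + n2 - 2) - 1)
g <- d * J
round(g, 4)  # 1.7378
```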

The Standardized Mean Difference

How does the addition of nitrogen-rich fertilizer affect the yield of wheat?

\(\Large g = 1.74\)


The Standardized Mean Difference

How does the addition of nitrogen-rich fertilizer affect the yield of wheat?

 

\(\Large g = 1.74\)

 

Interpretation, roughly (rules of thumb only; Cohen 1988):

  • 0.2 = ‘small’
  • 0.5 = ‘medium’
  • 0.8 = ‘large’

The Odds Ratio

For measuring a difference in binary outcomes between groups.

The Odds Ratio

For measuring a difference in binary outcomes between groups.

  • The odds of an event are the number of trials that produce the outcome divided by the number that do not; equivalently, the probability of ‘success’ divided by the probability of ‘failure’. For example, the odds of rolling a six:
\(O = \frac{1}{6-1} = \frac{1}{5} = 0.2\)
  • The odds ratio estimates the odds of an event happening in one group relative to the odds of the same event happening in the other group.

The Odds Ratio

How effective is pesticide X?

 

 

\(\Large OR = \frac{29/21}{8/42} = \frac{1.38}{0.19} = 7.25\)

The Odds Ratio

How effective is pesticide X?

 

\(\Large OR = \frac{29/21}{8/42} = \frac{1.38}{0.19} = 7.25\)

 

Insects in patches that received the pesticide treatment had 7.25 times the odds of dying compared with those in the pesticide-free control.

 

N.B. No mention of “risk,” “likely/likelihood,” or “probability”

The Odds Ratio

How effective is pesticide X?

 

\(\Large OR = \frac{29/21}{8/42} = 7.25\)

 

Interpretation:

  • 0 \(\leq\) OR \(<\) \(\infty\)
  • OR > 1: greater odds of B occurring in the presence of A.
  • OR = 1: equal odds (i.e. no association). Odds of B occurring are unrelated to the presence of A.
  • 0 < OR < 1: reduced odds of B occurring in the presence of A.
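The arithmetic from the pesticide example can be checked directly in base R (2 × 2 counts as given in the slides above: 29/21 died/survived with pesticide, 8/42 in the control):

```r
# 2 x 2 counts from the pesticide example: died vs survived
treat_died <- 29; treat_alive <- 21   # pesticide treatment
ctrl_died  <- 8;  ctrl_alive  <- 42   # pesticide-free control

odds_treat <- treat_died / treat_alive   # ~1.38
odds_ctrl  <- ctrl_died / ctrl_alive     # ~0.19
OR <- odds_treat / odds_ctrl
OR       # 7.25
log(OR)  # the log odds ratio, the scale usually analysed
```

Analyses are typically run on the log odds ratio, which is symmetric around 0 and closer to normally distributed.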

The Correlation Coefficient

For quantifying the continuous association between groups.

The Correlation Coefficient

Pearson’s r

It is the covariance between two variables, normalised by the product of their standard deviations

\(\Large r = \frac{COV(X, Y)}{\sigma_X \sigma_Y} = \frac{0.8}{1 * 1} = 0.8\)

Interpretation:

  • -1 \(\leq\) r \(\leq\) 1
  • r = 1: perfect positive association between A and B. As A increases, B increases.
  • r = 0: no association between A and B.
  • r = -1: perfect negative association between A and B. As A increases, B decreases.
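The definition above is easy to verify in base R: with any toy data (hypothetical numbers here), the covariance scaled by the two standard deviations reproduces cor():

```r
# Toy (hypothetical) data: e.g. signal expression vs toxicity
x <- c(1, 2, 3, 4, 5)
y <- c(2, 4, 5, 4, 5)

# Pearson's r from its definition: covariance / (sd_x * sd_y)
r_manual <- cov(x, y) / (sd(x) * sd(y))

all.equal(r_manual, cor(x, y))  # TRUE
```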

 

Extracting effects

Extracting effects

From best to worst-case:

  • Raw data are provided
    • Calculate directly yourself
  • Rich summary data are provided or can be estimated (e.g. means, errors, sample sizes)
    • Calculate directly yourself
  • Raw data are plotted
    • Snatch the data from the plot(s), then calculate yourself
  • Effects are reported
    • Look at them, write them down
  • Summary statistics are provided (e.g. test statistics, degrees of freedom)
    • Impute and/or approximate from available information
  • Some haphazard combination of the above
    • Do your best, success not guaranteed
  • Nothing is provided, everything is awful
    • Email the authors, probably be ignored

Calculating and converting among effects

in R

 

  • metafor::escalc(): Calculate various effect sizes or outcome measures (and the corresponding sampling variances) that are commonly used in meta-analyses. When you have rich summary data.

 

  • Package compute.es: For calculating the most widely used effect sizes (ES), along with their variances, confidence intervals and p-values. When you need to convert among effects or impute from test statistics.

Calculating effects

Example: from data to g

…the addition of nitrogen-rich fertilizer had a strong positive effect on yield (control = 242 \(\pm\) 9.72, treat = 259 \(\pm\) 9.73 kg/ha).

 

metafor::escalc('SMD', m1i = 259, sd1i = 9.73, n1i = 65, 
                       m2i = 242, sd2i = 9.72, n2i = 65)
## 
##       yi     vi 
## 1 1.7378 0.0424

Converting among effects

Example: from r to g

…across the surveyed farms, we found a strong correlation between nitrogen content and crop yield (r = 0.62, n = 33).

Converting among effects

compute.es::res(r = 0.62, n = 33)
## Mean Differences ES: 
##  
##  d [ 95 %CI] = 1.58 [ 0.7 , 2.46 ] 
##   var(d) = 0.2 
##   p-value(d) = 0 
##   U3(d) = 94.3 % 
##   CLES(d) = 86.81 % 
##   Cliff's Delta = 0.74 
##  
##  Correlation ES: 
##  
##  r [ 95 %CI] = 0.62 [ 0.35 , 0.79 ] 
##   var(r) = 0.01 
##   p-value(r) = 0 
##  
##  z [ 95 %CI] = 0.73 [ 0.37 , 1.08 ] 
##   var(z) = 0.03 
##   p-value(z) = 0 
##  
##  Odds Ratio ES: 
##  
##  OR [ 95 %CI] = 17.58 [ 3.54 , 87.23 ] 
##   p-value(OR) = 0 
##  
##  Log OR [ 95 %CI] = 2.87 [ 1.26 , 4.47 ] 
##   var(lOR) = 0.67 
##   p-value(Log OR) = 0 
##  
##  Other: 
##  
##  NNT = 1.75 
##  Total N = 33
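The headline d value can be reproduced by hand with the standard r-to-d conversion (one common convention among several; compute.es additionally supplies the variances and intervals):

```r
r <- 0.62
d <- 2 * r / sqrt(1 - r^2)   # standard r -> d conversion
round(d, 2)                  # 1.58, matching the compute.es output above
```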

 

Meta-analytic (statistical) models

The story so far

We have:

  1. Formulated a question and designed a search
  2. Scoured and collated the existing literature
  3. Extracted numerical outcome measures from every study
  4. Now what?

Now what?

Two goals:

  1. Estimate overall effect
  2. Assess and explore heterogeneity between studies

Now what?

Two goals:

  1. Estimate overall effect
  2. Assess and explore heterogeneity between studies

Meta-analytic models

Two types of statistical model

 

  • Fixed or Common-effects model
  • Random-effects model

 

The decision is informed by the question and nature of the constituent studies

Meta-analytic models

  • The common-effects model assumes there is one true effect that underlies all the studies in the analysis. All differences in observed effects among individual studies are due to random sampling error alone.

  • The random-effects model assumes the true effect differs from study to study. Differences in observed effects among studies are due to random sampling error, as well as between-study heterogeneity in true effects. You’ll almost always use this.
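The practical difference between the two models is in the weights. A minimal base-R sketch with hypothetical numbers: common-effects weights are \(1/v_i\), while random-effects weights add the between-study variance \(\tau^2\) to the denominator, pulling the weights closer together:

```r
# Hypothetical effect sizes and sampling variances
yi <- c(0.2, 0.5, 0.1)
vi <- c(0.02, 0.10, 0.05)
tau2 <- 0.04                  # assumed between-study variance

w_common <- 1 / vi            # common-effects weights
w_random <- 1 / (vi + tau2)   # random-effects weights

# Weighted mean effect under each model
sum(w_common * yi) / sum(w_common)   # 0.2125
sum(w_random * yi) / sum(w_random)   # ~0.2295 (small studies weigh relatively more)
```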

The ‘metafor’ package

 

  • A comprehensive (or near enough) package for meta-analysis modelling in R
  • Extensive, extremely helpful package documentation, as well as a website: http://www.metafor-project.org/doku.php
  • Created by Wolfgang Viechtbauer

The ‘metafor’ package

Some key functions

 

  • escalc(): For effect size calculation
  • rma(): The workhorse for linear meta-analytic modelling (fixed/random/mixed-effects)
  • rma.mv(): For multilevel linear modelling (fixed/random/mixed-effects)

Some data

Are aposematic signals honest?

 

  • Aposematic signals = ‘warning’ signals
  • Prediction: signals are honest guides to strength of chemical defences
  • Data: correlations (Pearson’s r) between measures of colour signal expression and toxicity (122 effects/22 studies)
  • meta_warning in your datasets folder

Running a random-effects model

Inspect the data

names(dat_warning)
##  [1] "author"      "year"        "obs"         "study"       "group"       "study_type"  "class"       "subclass"   
##  [9] "order"       "suborder"    "family"      "genus"       "species"     "tox_measure" "col_var"     "n"          
## [17] "r"

Running a random-effects model

Inspect the data

dat_warning[, 16:17]
## # A tibble: 122 × 2
##        n     r
##    <dbl> <dbl>
##  1    39  0.19
##  2    36  0.22
##  3    36  0.12
##  4    36  0.38
##  5    37  0.03
##  6    37  0.22
##  7    39 -0.09
##  8    36  0.3 
##  9    36  0.21
## 10    36 -0.14
## # ℹ 112 more rows

Running a random-effects model

Convert our correlation coefficient (Pearson’s r) to the more suitable Fisher’s z

dat_warning <- escalc(measure = 'ZCOR', ri = r, ni = n, data = dat_warning)
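Under the hood, measure = 'ZCOR' applies Fisher's variance-stabilising transformation, \(z = \operatorname{atanh}(r)\), with sampling variance \(1/(n-3)\). A base-R sketch for the first effect in the dataset:

```r
r <- 0.19; n <- 39       # first row of dat_warning
zi <- atanh(r)           # Fisher's z: 0.5 * log((1 + r) / (1 - r))
vi <- 1 / (n - 3)        # sampling variance of z

round(zi, 4)  # 0.1923
round(vi, 4)  # 0.0278
```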

Running a random-effects model

Have another look at the data

dat_warning[, 16:19]
## 
##       n     r      yi     vi 
## 1    39  0.19  0.1923 0.0278 
## 2    36  0.22  0.2237 0.0303 
## 3    36  0.12  0.1206 0.0303 
## 4    36  0.38  0.4001 0.0303 
## 5    37  0.03  0.0300 0.0294 
## 6    37  0.22  0.2237 0.0294 
## 7    39 -0.09 -0.0902 0.0278 
## 8    36  0.30  0.3095 0.0303 
## 9    36  0.21  0.2132 0.0303 
## 10   36 -0.14 -0.1409 0.0303 
## 11   37 -0.11 -0.1104 0.0294 
## 12   37 -0.19 -0.1923 0.0294 
## 13   30  0.41  0.4356 0.0370 
## 14   17 -0.03 -0.0300 0.0714 
## 15   25 -0.19 -0.1923 0.0455 
## 16   22  0.41  0.4356 0.0526 
## 17   27  0.61  0.7089 0.0417 
## 18   22  0.16  0.1614 0.0526 
## 19   27  0.56  0.6328 0.0417 
## 20   20  0.59  0.6777 0.0588 
## 21   20  0.63  0.7414 0.0588 
## 22   20  0.03  0.0300 0.0588 
## 23   20  0.02  0.0200 0.0588 
## 24   20  0.33  0.3428 0.0588 
## 25   20 -0.53 -0.5901 0.0588 
## 26   20  0.02  0.0200 0.0588 
## 27   20  0.25  0.2554 0.0588 
## 28   32  0.61  0.7089 0.0345 
## 29   32  0.57  0.6475 0.0345 
## 30   32 -0.02 -0.0200 0.0345 
## 31   32  0.29  0.2986 0.0345 
## 32   32  0.14  0.1409 0.0345 
## 33   32  0.22  0.2237 0.0345 
## 34   22  0.05  0.0500 0.0526 
## 35   22  0.15  0.1511 0.0526 
## 36   22  0.28  0.2877 0.0526 
## 37   22  0.42  0.4477 0.0526 
## 38   22  0.47  0.5101 0.0526 
## 39   23  0.27  0.2769 0.0500 
## 40   23  0.31  0.3205 0.0500 
## 41   23  0.15  0.1511 0.0500 
## 42   23  0.22  0.2237 0.0500 
## 43   23  0.26  0.2661 0.0500 
## 44   25  0.20  0.2027 0.0455 
## 45   25  0.09  0.0902 0.0455 
## 46   25  0.20  0.2027 0.0455 
## 47   25  0.37  0.3884 0.0455 
## 48   25  0.48  0.5230 0.0455 
## 49    9 -0.27 -0.2769 0.1667 
## 50    9 -0.33 -0.3428 0.1667 
## 51    9 -0.04 -0.0400 0.1667 
## 52    9 -0.15 -0.1511 0.1667 
## 53    9 -0.28 -0.2877 0.1667 
## 54    9 -0.37 -0.3884 0.1667 
## 55   11  0.78  1.0454 0.1250 
## 56   11  0.50  0.5493 0.1250 
## 57   11  0.48  0.5230 0.1250 
## 58   11  0.53  0.5901 0.1250 
## 59   11  0.08  0.0802 0.1250 
## 60   11  0.55  0.6184 0.1250 
## 61   10  0.78  1.0454 0.1429 
## 62   10  0.92  1.5890 0.1429 
## 63   25  0.10  0.1003 0.0455 
## 64   16  0.08  0.0802 0.0769 
## 65   16  0.40  0.4236 0.0769 
## 66   23  0.37  0.3884 0.0500 
## 67   23 -0.17 -0.1717 0.0500 
## 68   11 -0.45 -0.4847 0.1250 
## 69   11 -0.48 -0.5230 0.1250 
## 70   19  0.84  1.2212 0.0625 
## 71   24  0.83  1.1881 0.0476 
## 72   10  0.86  1.2933 0.1429 
## 73   21  0.70  0.8673 0.0556 
## 74   21  0.55  0.6184 0.0556 
## 75   21  0.53  0.5901 0.0556 
## 76   21  0.67  0.8107 0.0556 
## 77   18  0.71  0.8872 0.0667 
## 78   18 -0.44 -0.4722 0.0667 
## 79   18  0.25  0.2554 0.0667 
## 80   13  0.23  0.2342 0.1000 
## 81   13  0.42  0.4477 0.1000 
## 82   13 -0.18 -0.1820 0.1000 
## 83   10 -0.34 -0.3541 0.1429 
## 84   10  0.21  0.2132 0.1429 
## 85   10  0.19  0.1923 0.1429 
## 86    5 -0.95 -1.8318 0.5000 
## 87    5 -0.47 -0.5101 0.5000 
## 88    5 -0.26 -0.2661 0.5000 
## 89    5  0.25  0.2554 0.5000 
## 90   43  0.13  0.1307 0.0250 
## 91   43  0.04  0.0400 0.0250 
## 92   43  0.07  0.0701 0.0250 
## 93   43  0.13  0.1307 0.0250 
## 94  104  0.54  0.6042 0.0099 
## 95  104  0.29  0.2986 0.0099 
## 96    6  0.69  0.8480 0.3333 
## 97    6 -0.07 -0.0701 0.3333 
## 98    6  0.48  0.5230 0.3333 
## 99    6  0.51  0.5627 0.3333 
## 100   7  0.25  0.2554 0.2500 
## 101  14  0.22  0.2237 0.0909 
## 102  14 -0.58 -0.6625 0.0909 
## 103  14 -0.56 -0.6328 0.0909 
## 104  14 -0.61 -0.7089 0.0909 
## 105  12  0.05  0.0500 0.1111 
## 106   6  0.24  0.2448 0.3333 
## 107   6 -0.18 -0.1820 0.3333 
## 108   6 -0.17 -0.1717 0.3333 
## 109   6 -0.07 -0.0701 0.3333 
## 110   6 -0.01 -0.0100 0.3333 
## 111  18 -0.01 -0.0100 0.0667 
## 112  18  0.15  0.1511 0.0667 
## 113  18  0.22  0.2237 0.0667 
## 114  18  0.26  0.2661 0.0667 
## 115  18  0.19  0.1923 0.0667 
## 116   5 -0.67 -0.8107 0.5000 
## 117   5  0.77  1.0203 0.5000 
## 118   5  0.40  0.4236 0.5000 
## 119   5  0.42  0.4477 0.5000 
## 120   5  0.86  1.2933 0.5000 
## 121   8  0.83  1.1881 0.2000 
## 122   8  0.76  0.9962 0.2000

Running a random-effects model

m_random <- rma(yi = yi, vi = vi, method = "REML", weighted = TRUE, data = dat_warning)
  • yi: The vector containing our effect sizes
  • vi: The vector containing the associated variance for each effect size
  • method: REML for random-effects estimation (many options, but that’s the default & a good choice)
  • weighted: Should effect sizes be weighted during estimation? (Default is TRUE, but we’ll be explicit).
  • data: the dataset containing all this

Interpreting a random-effects model

Take a look at the results

m_random
## 
## Random-Effects Model (k = 122; tau^2 estimator: REML)
## 
## tau^2 (estimated amount of total heterogeneity): 0.0803 (SE = 0.0191)
## tau (square root of estimated tau^2 value):      0.2834
## I^2 (total heterogeneity / total variability):   58.26%
## H^2 (total variability / sampling variability):  2.40
## 
## Test for Heterogeneity:
## Q(df = 121) = 280.1755, p-val < .0001
## 
## Model Results:
## 
## estimate      se    zval    pval   ci.lb   ci.ub      
##   0.2414  0.0361  6.6852  <.0001  0.1706  0.3122  *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

N.B. This is still Fisher’s z, but we could transform back to r using metafor::transf.ztor()
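For instance, back-transforming the overall estimate and its confidence bounds with tanh (the inverse of Fisher's transformation, which is what transf.ztor() computes):

```r
z_hat <- 0.2414                     # overall estimate on Fisher's z scale
round(tanh(z_hat), 3)               # ~0.237 on the r scale
round(tanh(c(0.1706, 0.3122)), 3)   # back-transformed CI bounds
```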

Interpreting a random-effects model

Are aposematic signals honest?

 

 

Mean overall correlation between colour signal expression & toxicity under a random-effects model is z = 0.241 ± 0.036 [0.171, 0.312].

Visualising your model: the forest plot

A forest plot displays the effect size estimates (and confidence intervals) of individual contributing studies, as well as the overall mean and its confidence interval. Can use the forest() function from metafor for a quick (and fairly ugly) plot.

# For a subset only
forest(m_subset, slab = paste(dat_subset$author, dat_subset$year))

Visualising your model: the forest plot

A forest plot displays the effect size estimates (and confidence intervals) of individual contributing studies, as well as the overall mean and its confidence interval. Can use the forest() function from metafor for a quick (and fairly ugly) plot.

 

Quantifying heterogeneity

Now what?

Two goals:

  1. Estimate overall effect
  2. Assess and explore heterogeneity between studies

What is heterogeneity?

Heterogeneity is variation in effect sizes, beyond sampling error

 

 

Why is heterogeneity interesting?

E.g.: We synthesise 32 experimental studies examining whether the presence of herbivores can help to prevent the establishment of newly-invasive plants. We find a moderate reduction in the survival of exotic plants in the presence of herbivores (SMD = 0.36 \(\pm\) 0.08), alongside high heterogeneity (\(I^2\) = 86%).

  • ☑ Do herbivores help?
  • ☐ Are the effects density dependent?
  • ☐ Are there differences among specialist & generalist herbivores?
  • ☐ Are there differences among herbivore taxa (e.g. vertebrate vs invertebrate)?

The point is

Often heterogeneity is equally or more interesting than overall mean effects.

 

Especially in ecology, evolution & agriculture, where it can be leveraged to test basic and applied theory via a priori predictions.

 

Quantifying heterogeneity

Common statistics (and rough definitions)

 

\(\tau\): The standard deviation of underlying true effects across studies. Units = effect size.

\(\tau^2\): The variance of underlying true effects across studies. Units = effect size squared.

\(I^2\): The proportion of variation in effect sizes that cannot be explained by sampling error. Range = 0 to 100%.

\(95\%\ CI\): The precision of the estimated mean effect (confidence interval).

\(95\%\ PI\): The expected dispersion of true effects around the mean (prediction interval).
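As a rough base-R sketch, the Q-based version of \(I^2\) (Higgins & Thompson) can be computed from a model's heterogeneity test, here using the Q and df reported for m_random earlier. Note metafor reports a slightly different, \(\tau^2\)-based estimate (58.26%):

```r
Q  <- 280.1755   # heterogeneity test statistic from m_random
df <- 121        # its degrees of freedom (k - 1)
I2 <- max(0, (Q - df) / Q) * 100
round(I2, 1)     # ~56.8
```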

Quantifying heterogeneity

Colditz et al. (1994) J. Am. Med. Assoc

 

  • Aim: Examine the overall effectiveness of the BCG vaccine for preventing tuberculosis and to examine moderators that may potentially influence the size of the effect.
  • Summary: Contains the results from 13 studies examining the effectiveness of the Bacillus Calmette-Guerin (BCG) vaccine against tuberculosis, conducted using different methods of treatment allocation, at different times, and across different study populations & locations.

Quantifying heterogeneity

# Load our library
library(metafor)

# Load data on bcg and calculate effect size (odds ratio)
bcg_data <- escalc(measure = "OR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)

# Run a random-effects model
m_bcg <- rma(yi, vi, method = 'REML', data = bcg_data)

# And inspect the result
m_bcg

Quantifying heterogeneity

 

What is the overall mean effect of the BCG vaccine on tuberculosis prevalence? How heterogeneous are the effects between studies? (Where is our prediction interval?).

Quantifying heterogeneity

# Confidence and prediction intervals
predict(m_bcg)
## 
##     pred     se   ci.lb   ci.ub   pi.lb  pi.ub 
##  -0.7452 0.1860 -1.1098 -0.3806 -1.9412 0.4508
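These values are on the log odds ratio scale; exponentiating puts them back on the odds ratio scale (base-R arithmetic on the numbers printed above):

```r
or_hat <- exp(-0.7452)             # overall mean OR: ~0.47, odds of TB roughly halved
or_pi  <- exp(c(-1.9412, 0.4508))  # prediction interval on the OR scale
round(or_hat, 2)  # 0.47
round(or_pi, 2)   # 0.14 to 1.57
```

Note the prediction interval spans 1 on the OR scale, so in some study settings the vaccine may confer little or no benefit.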

Meta-regression

  • We have our effects
  • We know they vary
  • We want to know the source(s) of this variation
  • We have hypotheses as to the causes of such variation
  • How do we test them?

Meta-regression in action

Forest plot for BCG data

Meta-regression in action

Given all that heterogeneity, we may like to consider the influence of moderators via meta-regression, by extending our random-effects model to become a mixed-effects model. As part of the dataset, the authors also recorded the following information about each study:

  • ablat: absolute latitude of the study location (in degrees)
  • alloc: method of treatment allocation (random, alternate, or systematic assignment)
  • year: publication year

Hypotheses?

Meta-regression in action

Let’s test our hypothesised moderator using a mixed-effects model. In metafor, we can specify this via:

bcg_me <- rma(yi, vi, mods = ~ ablat, data = bcg_data)

And check out the results:

Meta-regression in action

  • \(R^2\): Model ‘fit’ ~85%.
  • intrcpt: LOR is ~0 when ablat = 0 (i.e. treatment less effective/ineffective at the equator)
  • ablat: Moderate, negative effect of ablat on LOR estimates. Estimated (average) log odds ratio becomes increasingly negative (i.e. treatment effect is greater) for study sites further from the equator.

Meta-regression in action

  • Is the Bacillus Calmette-Guerin (BCG) vaccine effective in preventing tuberculosis infections?

Yes, the sum-total evidence to date (well, ~1994) suggests so.

  • Is there variation in vaccine efficacy and, if so, what drives it?

Yes. A moderate amount of variation is explained by the latitude of the population to whom the vaccine is administered.

 

Thanks! But continue for ‘Assessing publication bias’ slides…

Model checking & publication bias

Non-exhaustive, example sources of bias:

  • Indexing/search bias: few search databases, improper terms
  • Selection bias: selective, post-hoc inclusion of studies
  • Detection bias: differences between groups within studies in how outcomes are determined
  • Attrition bias: differences between groups in withdrawals from a study
  • Reporting bias: differences between reported and unreported findings
  • Publication bias: studies with positive and ‘significant’ results are more likely to be published (i.e. the file-drawer effect)

Assessing publication bias: the funnel plot

“The small proportion of results chosen for publication are unrepresentative of scientists’ repeated samplings of the real world.” - Young et al. PLoS Medicine

 

We can take steps (searching grey literature, designing effective and broad searches), but ultimately we can’t change the underlying forces. We do, however, have tools at our disposal to explore such possible effects.

Assessing publication bias: the funnel plot

A funnel plot is a scatter plot of each study’s effect size vs a measure of the study’s precision (or size).
funnel(m_random)

Assessing publication bias: the funnel plot

The logic:

  • Small, underpowered studies tend to report larger effect sizes compared with well-powered studies.
  • A plot of effects vs precision should therefore form a ‘funnel’, with larger studies converging on the meta-analytic mean and smaller studies scattered symmetrically around it.
  • Asymmetry may therefore indicate publication bias. If non-significant, imprecisely estimated effects are relegated to the file-drawer, then ‘holes’ will appear in the funnel plot.

Assessing publication bias: the funnel plot

Some notes of caution

  • Funnel plots are not definitive tests of publication bias. There are many forms of publication bias, and several possible causes of funnel plot asymmetry.
  • Hence, trim-and-fill analyses do not generate ‘corrected’ or more ‘valid’ estimates of the overall effect. They are a useful way of examining the sensitivity of the results to one particular form of publication bias.

 

Thanks!